linear estimator - definition. What is a linear estimator?

Gauss–Markov theorem
THEOREM
Best linear unbiased estimator; BLUE (statistics); Gauss–Markov assumptions; Gauss–Markov least squares theorem; Gauss–Markov–Aitken theorem; Linear estimator; Gauss–Markov model; Spherical error

Estimator         
USED IN MATHEMATICAL STATISTICS TO DETERMINE AN ESTIMATED VALUE
Efficiency bound; Restricted estimate; Unrestricted estimate; Asymptotically unbiased; Estimators; Asymptotically normal estimator; Parameter estimate; Universal estimator; Estimated value; Statistical estimate; Estimate (statistics)
noun: One who estimates or values; a valuer.
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean.
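
To make the distinction concrete, here is a minimal Python sketch (the helper name sample_mean and the data values are illustrative, not taken from the entry above): the function is the estimator, the unknown population mean is the estimand, and the returned number is the estimate.

    # Estimator: a rule that maps observed data to a number.
    def sample_mean(observations):
        return sum(observations) / len(observations)

    # Observed data (illustrative values).
    sample = [4.2, 5.1, 3.8, 4.9, 5.4]

    # Estimate: the estimator applied to this particular sample;
    # it approximates the estimand, the unknown population mean.
    estimate = sample_mean(sample)
    print(estimate)  # 4.68
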
linear map         
  • The function f : R^2 -> R^2 with f(x, y) = (2x, y) is a linear map. This function scales the x component of a vector by the factor 2.
  • The function f(x, y) = (2x, y) is additive: it does not matter whether vectors are first added and then mapped or first mapped and then added: f(a + b) = f(a) + f(b)
  • The function f(x, y) = (2x, y) is homogeneous: it does not matter whether a vector is first scaled and then mapped or first mapped and then scaled: f(λa) = λf(a)
MAPPING THAT PRESERVES THE OPERATIONS OF ADDITION AND SCALAR MULTIPLICATION
Linear operator; Linear operators; Linear mapping; Linear transformation; Linear transformations; Linear transform; Linear maps; Linear isomorphism; Linear isomorphic; Homogeneous linear transformation; Bijective linear map; Nonlinear operator; Linear Schrödinger operator; Vector space homomorphism; Vector space isomorphism; Linear extension of a function; Linear extension (linear algebra); Extend by linearity; Linear endomorphism
<mathematics> (Or "linear transformation") A function from one vector space to another which respects the additive and multiplicative structures of the two: that is, for any two vectors u, v in the source vector space and any scalar k in the field over which it is a vector space, a linear map f satisfies f(u + kv) = f(u) + kf(v). (1996-09-30)
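
As a quick sanity check of the defining identity, the following Python sketch (an illustrative test, not part of the entry) verifies f(u + kv) = f(u) + kf(v) for the example map f(x, y) = (2x, y) above, using arbitrarily chosen vectors and scalar:

    # The example linear map from above: doubles the x component.
    def f(v):
        x, y = v
        return (2 * x, y)

    # u + k*v, computed componentwise.
    def add_scaled(u, v, k):
        return (u[0] + k * v[0], u[1] + k * v[1])

    u, v, k = (1.0, 3.0), (-2.0, 5.0), 4.0

    lhs = f(add_scaled(u, v, k))        # map the combination
    rhs = add_scaled(f(u), f(v), k)     # combine the mapped images
    assert lhs == rhs == (-14.0, 23.0)  # linearity holds for this choice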

Wikipedia

Gauss–Markov theorem

In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors) states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, provided the errors in the linear regression model are uncorrelated, have equal variances, and have an expected value of zero. The errors need not be normal, nor independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators with lower variance exist: see, for example, the James–Stein estimator (which also drops linearity), ridge regression, or simply any degenerate estimator.
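
In symbols, a standard textbook statement (the notation below is the usual convention, assumed here rather than quoted from the article) is: for the linear model

    y = X\beta + \varepsilon, \qquad \mathbb{E}[\varepsilon] = 0, \qquad \operatorname{Var}(\varepsilon) = \sigma^2 I,

the OLS estimator

    \hat{\beta} = (X^\top X)^{-1} X^\top y

is linear in y and unbiased, and for any other linear unbiased estimator \tilde{\beta} the matrix \operatorname{Var}(\tilde{\beta}) - \operatorname{Var}(\hat{\beta}) is positive semidefinite; in this sense OLS is "best".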

The theorem is named after Carl Friedrich Gauss and Andrey Markov, although Gauss's work significantly predates Markov's. While Gauss derived the result under the assumptions of independence and normality, Markov reduced the assumptions to the form stated above. A further generalization to non-spherical errors was given by Alexander Aitken.
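
For reference, Aitken's generalization (again in standard notation assumed here, not drawn from the excerpt) relaxes the spherical-error condition to \operatorname{Var}(\varepsilon) = \sigma^2 \Omega for a known positive definite matrix \Omega; the best linear unbiased estimator is then the generalized least squares estimator

    \hat{\beta}_{\mathrm{GLS}} = (X^\top \Omega^{-1} X)^{-1} X^\top \Omega^{-1} y,

which reduces to the OLS estimator when \Omega = I.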